AAAI AI-Alert for Feb 7, 2023
Microsoft Taps ChatGPT to Boost Bing--and Beat Google
Microsoft's search engine Bing is getting an AI refresh. At the company's campus in Redmond, Washington, today, executives unveiled a new version of Bing incorporating the technology behind startup OpenAI's viral chatbot ChatGPT. With the updates, Bing results will include fluent written responses that summarize information found on the web, plus a new chatbot interface for complex queries. Satya Nadella, Microsoft's CEO, claimed the new features signal a new paradigm for search. "In fact, a new race starts today," he said.
Robot: I'm sorry. Human: I don't care anymore!
Like human co-workers, robots can make mistakes that violate a human's trust in them. When mistakes happen, humans often see robots as less trustworthy, and their trust in them declines. The study examines four strategies that might repair and mitigate the negative impact of these trust violations: apologies, denials, explanations, and promises of trustworthiness. In an experiment, 240 participants worked with a robot co-worker to accomplish a task, during which the robot sometimes made mistakes.
Quora opens its new AI chatbot app Poe to the general public • TechCrunch
Q&A platform Quora has opened up public access to its new AI chatbot app, Poe, which lets users ask questions and get answers from a range of AI chatbots, including those from ChatGPT maker OpenAI and other companies such as Anthropic. Beyond allowing users to experiment with new AI technologies, Poe's content will ultimately help to evolve Quora itself, the company says. Quora first announced Poe's mobile app in December, but at the time it required an invite to try it out. With the public launch on Friday, anyone can now use Poe's app. For now, it's available only to iOS users, but Quora says the service will arrive on other platforms in a few months.
Coming AI regulation may not protect us from dangerous AI
Among the Act's shortcomings: it offers no criteria by which to define unacceptable risk for AI systems and no method to add new high-risk applications to the Act if such applications are discovered to pose a substantial danger of harm. This is particularly problematic because AI systems are becoming broader in their utility. It requires only that companies take into account harm to individuals, excluding indirect and aggregate harms to society; an AI system that has a very small effect on, say, each person's voting patterns might in the aggregate have a huge social impact. And it permits virtually no public oversight over the assessment of whether AI meets the Act's requirements.
AI models spit out photos of real people and copyrighted images
These image-generating AI models are trained on vast data sets of images with text descriptions scraped from the internet. The latest generation of the technology works by progressively adding noise to the images in the data set until each original is nothing but a collection of random pixels; the AI model then learns to reverse the process, turning noise back into a new image. The paper marks the first time researchers have managed to prove that these AI models memorize images in their training sets, says Ryan Webster, a PhD student at the University of Caen Normandy in France, who has studied privacy in other image generation models but was not involved in the research. This could have implications for startups wanting to use generative AI models in health care, because it shows that these systems risk leaking sensitive private information.
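The "forward" half of that process, repeatedly mixing a little Gaussian noise into an image until nothing of the original survives, can be sketched in a few lines. This is an illustrative toy, not the paper's method: the step count and the fixed noise weight `beta` here are assumptions for demonstration, and a real diffusion model uses a learned network to run the process in reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(image, steps=1000, beta=0.02):
    """Progressively noise an image until it is statistically
    indistinguishable from random noise (forward diffusion)."""
    x = image.astype(float)
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        # each step keeps most of the signal and mixes in a little noise;
        # the sqrt weights keep the overall variance roughly constant
        x = np.sqrt(1 - beta) * x + np.sqrt(beta) * noise
    return x

image = rng.uniform(size=(8, 8))   # stand-in for one training image
noised = forward_diffuse(image)
# after enough steps, almost no trace of the original image remains
print(abs(np.corrcoef(image.ravel(), noised.ravel())[0, 1]))
```

The memorization result concerns the reverse direction: a trained model denoising from scratch can sometimes reproduce a training image nearly verbatim instead of producing a genuinely new one.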
Get Used to Face Recognition in Stadiums
Last week, the New York Attorney General's office sent Madison Square Garden Entertainment a letter demanding answers. The state's top law enforcement agency wants to know more about how the company operating Radio City Music Hall and the storied arena where the NBA's Knicks play uses a face recognition system to deny entry to certain people, and in particular lawyers representing clients in dispute with Madison Square Garden. The letter says that because the ban is thought to cover staff at 90 law firms, it may exclude thousands of people and deter them from taking on cases "including sexual harassment or employment discrimination claims." Since the face recognition system became widely known in recent weeks, MSG's management has stood squarely behind the idea of checking faces at the door with algorithms. In an unsigned statement, the company says its system is not an attack on lawyers, though some are "ambulance chasers and money grabbers."
Self-Driving Car Services Want to Expand in San Francisco Despite Recent Hiccups
Waymo has operated a driverless service in suburban Arizona since the end of 2020. But that is very different from a congested city. "If you get disabled on a quiet suburban street, you are not in anyone's way," said Matt Wansley, a professor at the Cardozo School of Law in New York who specializes in emerging automotive technologies. The Waymo car that stopped in the middle of a San Francisco intersection last week entered a very complex and busy intersection "due to temporary road closures that precluded use of the intended route," Waymo said. When a car cannot navigate a situation on its own, remote technicians can send the car additional information that can help it get going again.
Fake Pictures of People of Color Won't Fix AI Bias
Armed with a belief in technology's generative potential, a growing faction of researchers and companies aims to solve the problem of bias in AI by creating artificial images of people of color. Proponents argue that AI-powered generators can rectify the diversity gaps in existing image databases by supplementing them with synthetic images. Some researchers are using machine learning architectures to map existing photos of people onto new races in order to "balance the ethnic distribution" of datasets. Others, like Generated Media and Qoves Lab, are using similar technologies to create entirely new portraits for their image banks, "building … faces of every race and ethnicity," as Qoves Lab puts it, to ensure a "truly fair facial dataset." As they see it, these tools will resolve data biases by cheaply and efficiently producing diverse images on command.
Boston Dynamics and DHL's new robot is a hyper-efficient warehouse worker
"That's a very manual intensive job, and one that's not well-liked by many," says Sally Miller, global digital transformation officer for DHL Supply Chain, the world's largest third-party logistics company. Facing a labor shortage and high turnover in these kinds of warehouse jobs, DHL Supply Chain turned to robot maker Boston Dynamics to come up with a solution. After several years in development, the first two Stretch robots have just been deployed at an apparel company, which DHL Supply Chain declined to name, and about six more will be sent to other warehouse sites over the next three or four months. Stretch is the first robot that Boston Dynamics has purpose-built for a specific set of applications, according to Kevin Blankespoor, the company's senior vice president and general manager of warehouse robotics. His is a title that lays bare the potential Boston Dynamics sees in logistics.